Optimal deep neural networks for sparse recovery via Laplace techniques
Abstract
This paper introduces Laplace techniques for designing an optimal neural network that estimates simplex-valued sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a prescribed uniform input distribution) as the centroid computation of a polytope formed by the intersection of the simplex and an affine subspace determined by the measurements. Owing to this specific structure, the centroid can be computed analytically by extending a recent result by Lasserre that facilitates the volume computation of polytopes via Laplace transformations. Interestingly, both volume and centroid can be computed by a classical deep neural network comprising threshold functions, rectified linear (ReLU), and rectified polynomial (ReP) activation functions. The proposed construction of a deep neural network for sparse recovery is completely analytical, which allows time-consuming training procedures to be bypassed. Furthermore, we show that the number of layers in our construction equals the number of measurements, which may enable novel low-latency sparse recovery algorithms for a larger class of signals than that assumed in this paper. To assess the applicability of the proposed uniform input distribution, we showcase the recovery performance on samples that are soft-classification vectors generated from two standard datasets. As both volume and centroid computation are known to be computationally hard, the network width grows exponentially in the worst case. However, the width may be reduced by inducing sparse connectivity in the neural network via a well-suited basis of the affine subspace. Finally, we point out that our analytical construction may serve as a viable initialization to be further optimized and trained on particular input datasets at hand.
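To make the geometric reformulation concrete: under a uniform prior on the simplex, the MMSE estimate is the centroid of the polytope {x ≥ 0, Σx = 1, Ax = y}. The sketch below approximates that centroid by rejection sampling rather than by the paper's analytical Laplace/network construction; the sensing matrix `A`, signal `x_true`, and tolerance are hypothetical choices for illustration only.

```python
# Illustrative baseline (NOT the paper's analytical network): the MMSE
# estimate under a uniform simplex prior is the centroid of the polytope
# {x >= 0, sum(x) = 1, Ax = y}. We approximate it by Monte Carlo rejection
# sampling; A, x_true, and the tolerance below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2                               # signal dimension, number of measurements
A = rng.standard_normal((m, n))           # hypothetical sensing matrix
x_true = np.array([0.7, 0.3, 0.0, 0.0])   # sparse, simplex-valued signal
y = A @ x_true                            # compressed measurements

# Uniform samples on the simplex via the flat Dirichlet distribution
samples = rng.dirichlet(np.ones(n), size=200_000)

# Keep samples (approximately) consistent with the measurements
mask = np.linalg.norm(samples @ A.T - y, axis=1) < 0.1
centroid = samples[mask].mean(axis=0)     # Monte Carlo MMSE estimate

print("accepted samples:", mask.sum())
print("centroid estimate:", centroid.round(3))
```

Such a sampling baseline is only a sanity check; its accuracy degrades with the tolerance and the acceptance rate, whereas the analytical construction in the paper evaluates the centroid exactly.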
Similar Resources
Posterior Concentration for Sparse Deep Learning
Spike-and-Slab Deep Learning (SS-DL) is a fully Bayesian alternative to Dropout for improving generalizability of deep ReLU networks. This new type of regularization enables provable recovery of smooth input-output maps with unknown levels of smoothness. Indeed, we show that the posterior distribution concentrates at the near minimax rate for α-Hölder smooth maps, performing as well as if we kn...
Direct Load Control of Thermostatically Controlled Loads Based on Sparse Observations Using Deep Reinforcement Learning
This paper considers a demand response agent that must find a near-optimal sequence of decisions based on sparse observations of its environment. Extracting a relevant set of features from these observations is a challenging task and may require substantial domain knowledge. One way to tackle this problem is to store sequences of past observations and actions in the state vector, making it high...
Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT
X-ray computed tomography (CT) using sparse projection views is often used to reduce the radiation dose. However, due to the insufficient projection views, a reconstruction approach using the filtered back projection (FBP) produces severe streaking artifacts. Recently, deep learning approaches using large receptive field neural networks such as U-net have demonstrated impressive performance for...
Supervised Deep Sparse Coding Networks
In this paper, we propose a novel multilayer sparse coding network capable of efficiently adapting its own regularization parameters to a given dataset. The network is trained end-to-end with a supervised task-driven learning algorithm via error backpropagation. During training, the network learns both the dictionaries and the regularization parameters of each sparse coding layer so that the re...
Short-Term Memory Capacity in Networks via the Restricted Isometry Property
Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impac...
Journal: CoRR
Volume: abs/1709.01112
Pages: -
Published: 2017